13 research outputs found

    Learning by teaching a robot: The case of handwriting

    Thomas (all children's names have been changed) is five and a half years old and has been diagnosed with visuoconstructive deficits. He is under the care of an occupational therapist and tries to work around his inability to draw letters consistently. Vincent is six and struggles at school with his poor handwriting and even poorer self-confidence. Whereas Thomas is lively and quick to shift his attention from one activity to another, Vincent is shy and calm. Two very different children, each facing the same difficulty: writing legibly. Hidden behind these impaired skills, psychosocial difficulties arise: they underperform at school, Thomas has to go for follow-up visits every week, and they both live under the label of "special care." This is a source of anxiety for the children and their parents alike.

    C3PO: Learning to Achieve Arbitrary Goals via Massively Entropic Pretraining

    Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to achieve any arbitrary position and pose. Such a policy would allow for easier control and would be reusable as a key building block for downstream tasks. The method is two-fold: first, we introduce a novel exploration algorithm that optimizes for uniform coverage and discovers a set of achievable states, and we investigate its ability to attain both high coverage and hard-to-discover states; second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy's performance in achieving a large number of novel states. Finally, we showcase the influence of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, Halfcheetah, Humanoid and Ant embodiments.
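    The core loop of such a universal goal-achievement policy can be sketched as follows. This is a minimal illustration of the goal-conditioned idea only, not the paper's implementation: the sparse reward, the threshold `EPS`, and the function names are all assumptions for the sake of the example.

```python
import numpy as np

EPS = 0.05  # assumed success threshold on state distance


def goal_reward(state: np.ndarray, goal: np.ndarray) -> float:
    """Sparse goal-achievement reward: 1 when within EPS of the goal, else 0."""
    return float(np.linalg.norm(state - goal) < EPS)


def sample_training_goal(achievable_states: np.ndarray) -> np.ndarray:
    """Draw a training goal uniformly from the states found during exploration,
    mirroring the idea of reusing the discovered achievable-state set."""
    idx = np.random.randint(len(achievable_states))
    return achievable_states[idx]
```

    In a full training loop, goals sampled this way would condition the SAC policy's input, so a single network learns to reach any state in the discovered set.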

    Expressing Motivations By Facilitating Other’s Inverse Reinforcement Learning

    It is often necessary to understand each other's motivations in order to cooperate. Reaching such a mutual understanding requires two abilities: to build models of others' motivations in order to understand them, and to build a model of "my" motivations as perceived by others in order to be understood. Having a self-image seen by others requires two recursive orders of modeling, known in psychology as the first and second orders of theory of mind. In this paper, we present a second-order theory-of-mind cognitive architecture that aims to facilitate mutual understanding in multi-agent scenarios. We study different conditions of empathy and gratitude leading to irrational cooperation in the iterated prisoner's dilemma.
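    The mechanism by which empathy can tilt the prisoner's dilemma toward "irrational" cooperation can be illustrated with a minimal sketch: each agent maximizes a blend of its own payoff and its partner's, weighted by an empathy factor. The payoff matrix is the standard one; the blending scheme and names are illustrative assumptions, not the paper's model.

```python
# (my_move, other_move) -> (my_payoff, other_payoff), standard PD values
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}


def empathic_value(my_move: str, other_move: str, empathy: float) -> float:
    """Blend own and partner's payoff: (1 - empathy) * mine + empathy * theirs."""
    mine, theirs = PAYOFF[(my_move, other_move)]
    return (1 - empathy) * mine + empathy * theirs


def best_response(other_move: str, empathy: float) -> str:
    """Pick the move with the higher empathic value against a fixed partner move."""
    return max(("C", "D"), key=lambda m: empathic_value(m, other_move, empathy))
```

    With empathy 0, the best response to a cooperating partner is to defect (payoff 5 vs 3); with empathy 0.5, cooperation wins (3 vs 2.5), which is exactly the individually "irrational" cooperation the abstract refers to.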

    Child-Robot Spatial Arrangement in a Learning by Teaching Activity

    In this paper, we present an experiment in the context of a child-robot interaction where we study the influence of the child-robot spatial arrangement on the child's focus of attention and the perception of the robot's performance. In the "CoWriter learning by teaching" activity, the child teaches a Nao robot how to handwrite. Usually only face-to-face spatial arrangements are tested in educational child-robot interactions, but we explored two spatial conditions from Kendon's F-formation, the side-by-side and the face-to-face formations, in a within-subject experiment. We estimated the gaze behavior of the children and their consistency in grading the robot with regard to its progress in writing. Even though the demonstrations provided by children did not differ between the two conditions (i.e. the robot's learning didn't differ), the results showed that in the side-by-side condition children tended to be more indulgent with the robot's mistakes and to give it better feedback. These results highlight the influence of experimental choices in child-robot interaction.

    From Real-time Attention Assessment to “With-me-ness” in Human-Robot Interaction

    Measuring "how much the human is in the interaction" -- the level of engagement -- is instrumental in building effective interactive robots. Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one such indirect measure. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is with the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We expose in this paper the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator) to on-line computation of with-me-ness. We also report on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.
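    At its simplest, an attention-based measure of this kind can be computed as the fraction of time-steps at which the estimated focus of attention falls on a target that is relevant at that moment (the robot, a tablet, ...). The target names and the uniform time-step assumption below are illustrative, not the authors' exact formulation.

```python
def with_me_ness(attention_log, relevant_targets):
    """attention_log: per-time-step focus labels, e.g. ["robot", "away", ...].
    relevant_targets: per-time-step sets of the targets relevant at that step.
    Returns the fraction of steps where attention was on a relevant target."""
    if not attention_log:
        return 0.0
    hits = sum(
        focus in relevant
        for focus, relevant in zip(attention_log, relevant_targets)
    )
    return hits / len(attention_log)


# At step 3 the child looks away while the tablet is the relevant target:
log = ["robot", "tablet", "away", "robot"]
relevant = [{"robot"}, {"robot", "tablet"}, {"tablet"}, {"robot"}]
print(with_me_ness(log, relevant))  # 0.75
```

    In the on-line setting described in the abstract, the per-step focus labels would come from the head-pose estimator rather than a precomputed log.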

    Cognitive Architecture for Mutual Modelling

    In social robotics, robots need to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educational scenario, learners share their knowledge and must adapt their behaviour to make sure they are understood by others. Learners display behaviours in order to show their understanding, and teachers adapt in order to make sure that the learners' knowledge matches what is required. This ability requires a model of one's own mental states as perceived by others: "has the human understood that I (the robot) need this object for the task, or should I explain it once again?" In this paper, we discuss the importance of a cognitive architecture enabling second-order mutual modelling for human-robot interaction in educational contexts.

    Building Successful Long Child-Robot Interactions in a Learning Context

    The CoWriter activity involves a child in a rich and complex interaction where the child has to teach handwriting to a robot. The robot must convince the child that it needs the child's help and that it actually learns from the lessons. To keep the child engaged, the robot must learn at the right rate: not too fast, otherwise the child has no opportunity to improve their skills, and not too slow, otherwise the child may lose trust in their ability to improve the robot's skills. We tested this approach in real pedagogic/therapeutic contexts with children in difficulty, over repeated long sessions (40-60 min). Through 3 different case studies, we explored and refined experimental designs and algorithms so that the robot could adapt to the difficulties of each child and promote their motivation and self-confidence. We report positive observations, suggesting the children's commitment to helping the robot and their realization that they were good enough to be teachers, overcoming their initial low confidence with handwriting.

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres. This produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements. In our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
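    The calibration idea can be sketched numerically: within the instrument's linear range, fit a conversion factor between blank-corrected OD and the known particle count of a serially diluted microsphere suspension, then apply it to culture readings. The numbers and function names below are made up for illustration and are not from the study's protocol.

```python
import numpy as np


def fit_od_to_count(od, particle_count):
    """Least-squares slope through the origin: particles per unit OD."""
    od = np.asarray(od, dtype=float)
    particle_count = np.asarray(particle_count, dtype=float)
    return float(np.dot(od, particle_count) / np.dot(od, od))


def od_to_cells(od_reading, particles_per_od):
    """Convert a blank-corrected OD reading to an estimated cell count."""
    return od_reading * particles_per_od


# Serial 2-fold dilution of a stock with a known number of microspheres,
# with an idealized, perfectly linear instrument response:
counts = np.array([2e8, 1e8, 5e7, 2.5e7])
ods = counts / 4e8
k = fit_od_to_count(ods, counts)  # particles per unit OD
estimate = od_to_cells(0.1, k)    # cell-count estimate for a culture at OD 0.1
```

    A real calibration would first check which dilutions fall inside the instrument's linear range and fit only on those points, which is why the protocol also serves as a linearity check.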

    Mutual Understanding in Educational Human-Robot Collaborations

    Education is an art close to theater. A teacher takes on a role; he rehearses his speeches and his gestures, and he plays with the attention of his audience. But it is harder: more than entertaining, a teacher must shape the skills, the knowledge, and the motivation of his students. This requires, beyond understanding the learning dynamics of students, the talent to control the way he is understood so that he can steer those learning dynamics. We call this mutual understanding, formalized as the accuracy of one's predictions of others and of others' predictions of oneself. Robots for education, a field emerging from novel approaches involving new technologies, open a large horizon of unexplored pedagogical activities. Indeed, robots can take roles that humans cannot. For example, CoWriter is a robot that personifies a very unskilled beginner, so that even a child with strong difficulties can teach it handwriting: involving an adult would not be convincing, and casting another child in this role would be unethical. However, a strong limitation lies in the fact that robots have only a restricted perception with which to understand humans, and are themselves hardly understandable by humans. Consequently, robots for education suffer from the poor, even nonexistent, level of mutual understanding required by educational interactions. The first part of this thesis highlights the importance of human-robot mutual understanding in pedagogical collaborative activities like CoWriter and is based on real-world experimentation. The next two parts suggest how to implement such an ability in a robot that aims to interact with humans, by focusing on the modelling of motivations. One part concerns the external orchestration of the different models built by the robot to make predictions and to be predictable. The other focuses on the internal mechanisms of these models, based on the computational framework of reinforcement learning.